#automated tools
jcmarchi · 25 days ago
Ethics in automation: Addressing bias and compliance in AI
New Post has been published on https://thedigitalinsider.com/ethics-in-automation-addressing-bias-and-compliance-in-ai/
Ethics in automation: Addressing bias and compliance in AI
As companies rely more on automated systems, ethics has become a key concern. Algorithms increasingly shape decisions that were previously made by people, and these systems have an impact on jobs, credit, healthcare, and legal outcomes. That power demands responsibility. Without clear rules and ethical standards, automation can reinforce unfairness and cause harm.
Ignoring ethics is not just a matter of public trust; it affects real people in concrete ways. Biased systems can deny loans, jobs, or healthcare, and automation can increase the speed of bad decisions if no guardrails are in place. When systems make the wrong call, it’s often hard to appeal or even understand why, and the lack of transparency turns small errors into bigger issues.
Understanding bias in AI systems
Bias in automation often comes from data. If historical data includes discrimination, systems trained on it may repeat those patterns. For example, an AI tool used to screen job applicants might reject candidates based on gender, race, or age if its training data reflects those past biases. Bias also enters through design, where choices about what to measure, which outcomes to favour, and how to label data can create skewed results.
There are many kinds of bias. Sampling bias happens when a data set doesn’t represent all groups, whereas labelling bias can come from subjective human input. Even technical choices like optimisation targets or algorithm type can skew results.
The issues are not just theoretical. Amazon dropped its use of a recruiting tool in 2018 after it favoured male candidates, and some facial recognition systems have been found to misidentify people of colour at higher rates than Caucasians. Such problems damage trust and raise legal and social concerns.
Another real concern is proxy bias. Even when protected traits like race are not used directly, other features like zip code or education level can act as stand-ins, meaning the system may still discriminate even if the input seems neutral, for instance on the basis of richer or poorer areas. Proxy bias is hard to detect without careful testing. The rise in AI bias incidents is a sign that more attention is needed in system design.
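One way such testing can start is to check how strongly a seemingly neutral feature, like zip code, predicts the protected group, since a feature that encodes the group can act as a proxy for it. A minimal sketch, with entirely hypothetical names and numbers:

```typescript
// Hypothetical loan data: "group" stands in for a protected trait the model
// never sees directly; zip code is the seemingly neutral input feature.
interface Applicant { zip: string; group: "A" | "B"; approved: boolean; }

// Share of group B applicants in each zip code. If some zips are almost
// entirely one group, zip code can stand in for the group itself.
function groupShareByZip(data: Applicant[]): Record<string, number> {
  const totals: Record<string, number> = {};
  const groupB: Record<string, number> = {};
  for (const a of data) {
    totals[a.zip] = (totals[a.zip] ?? 0) + 1;
    if (a.group === "B") groupB[a.zip] = (groupB[a.zip] ?? 0) + 1;
  }
  const shares: Record<string, number> = {};
  for (const zip of Object.keys(totals)) {
    shares[zip] = (groupB[zip] ?? 0) / totals[zip];
  }
  return shares;
}

const sample: Applicant[] = [
  { zip: "10001", group: "A", approved: true },
  { zip: "10001", group: "A", approved: true },
  { zip: "10002", group: "B", approved: false },
  { zip: "10002", group: "B", approved: false },
];
const zipShares = groupShareByZip(sample);
// Zip 10001 is 0% group B and 10002 is 100%: zip fully encodes group here,
// so a model given only zip could still discriminate by group.
```

Real audits use stronger statistical tests, but even a tabulation like this can reveal that a "neutral" feature separates groups almost perfectly.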
Meeting the standards that matter
Laws are catching up. The EU’s AI Act, passed in 2024, ranks AI systems by risk. High-risk systems, like those used in hiring or credit scoring, must meet strict requirements, including transparency, human oversight, and bias checks. In the US, there is no single AI law, but regulators are active. The Equal Employment Opportunity Commission (EEOC) warns employers about the risks of AI-driven hiring tools, and the Federal Trade Commission (FTC) has also signalled that biased systems may violate anti-discrimination laws.
The White House has issued a Blueprint for an AI Bill of Rights, offering guidance on safe and ethical use. While not a law, it sets expectations, covering five key areas: safe systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.
Companies must also watch US state laws. California has moved to regulate algorithmic decision-making, and Illinois requires firms to tell job applicants if AI is used in video interviews. Failing to comply can bring fines and lawsuits.
Regulators in New York City now require audits for AI systems used in hiring. The audits must show whether the system gives fair results across gender and race groups, and employers must also notify applicants when automation is used.
Compliance is more than just avoiding penalties – it is also about establishing trust. Firms that can show that their systems are fair and accountable are more likely to win support from users and regulators.
How to build fairer systems
Ethics in automation doesn’t happen by chance. It takes planning, the right tools, and ongoing attention. Bias and fairness must be built into the process from the start, not bolted on later. That entails setting goals, choosing the right data, and including the right voices at the table.
Doing this well means following a few key strategies:
Conducting bias assessments
The first step in overcoming bias is to find it. Bias assessments should be performed early and often, from development to deployment, to ensure that systems do not produce unfair outcomes. Metrics might include differences in error rates across groups, or decisions that fall more heavily on one group than others.
Bias audits should be performed by third parties when possible. Internal reviews can miss key issues or lack independence, and transparency in objective audit processes builds public trust.
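One metric of this kind, differences in error rates across groups, can be sketched with hypothetical data:

```typescript
// One decision per row: what the system predicted vs. the true outcome.
interface Decision { group: string; predicted: boolean; actual: boolean; }

// False-negative rate per group: qualified people the system wrongly rejects.
function falseNegativeRates(rows: Decision[]): Record<string, number> {
  const positives: Record<string, number> = {};
  const misses: Record<string, number> = {};
  for (const r of rows) {
    if (!r.actual) continue; // FNR is measured over actual positives only
    positives[r.group] = (positives[r.group] ?? 0) + 1;
    if (!r.predicted) misses[r.group] = (misses[r.group] ?? 0) + 1;
  }
  const rates: Record<string, number> = {};
  for (const g of Object.keys(positives)) {
    rates[g] = (misses[g] ?? 0) / positives[g];
  }
  return rates;
}

// Hypothetical screening results: group Y's qualified applicants are
// rejected twice as often as group X's.
const rates = falseNegativeRates([
  { group: "X", predicted: true, actual: true },
  { group: "X", predicted: true, actual: true },
  { group: "X", predicted: true, actual: true },
  { group: "X", predicted: false, actual: true },
  { group: "Y", predicted: true, actual: true },
  { group: "Y", predicted: true, actual: true },
  { group: "Y", predicted: false, actual: true },
  { group: "Y", predicted: false, actual: true },
]);
// X: 1 of 4 rejected (0.25); Y: 2 of 4 rejected (0.5)
```

A gap like this, measured regularly from development through deployment, is exactly the kind of signal an assessment is meant to surface.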
Implementing diverse data sets
Diverse training data helps reduce bias by including samples from all user groups, especially those often excluded. A voice assistant trained mostly on male voices will work poorly for women, and a credit scoring model that lacks data on low-income users may misjudge them.
Data diversity also helps models adapt to real-world use. Users come from different backgrounds, and systems should reflect that. Geographic, cultural, and linguistic variety all matter.
Diverse data isn’t enough on its own – it must also be accurate and well-labelled. Garbage in, garbage out still applies, so teams need to check for errors and gaps, and correct them.
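Checking for gaps of this sort can start with something as simple as counting group representation. A sketch with hypothetical labels and an arbitrary 20% floor:

```typescript
// Count examples per group and flag any group whose share of the data set
// falls below a chosen floor (20% here, an arbitrary illustrative value).
function underrepresented(groups: string[], floor = 0.2): string[] {
  const counts: Record<string, number> = {};
  for (const g of groups) counts[g] = (counts[g] ?? 0) + 1;
  return Object.keys(counts).filter((g) => counts[g] / groups.length < floor);
}

// Hypothetical speaker labels for a voice data set: female voices are scarce.
const flagged = underrepresented([
  "male", "male", "male", "male", "male",
  "male", "male", "male", "male", "female",
]);
// flagged is ["female"]: 1 of 10 samples, under the 20% floor
```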
Promoting inclusivity in design
Inclusive design involves the people affected. Developers should consult users, especially those at risk of being harmed by a biased system (or of causing harm by deploying one), because doing so helps uncover blind spots. That might mean involving advocacy groups, civil rights experts, or local communities in product reviews. It means listening before systems go live, not after complaints roll in.
Inclusive design also means cross-disciplinary teams. Bringing in voices from ethics, law, and social science can improve decision-making, as these teams are more likely to ask different questions and spot risks.
Teams should be diverse too. People with different life experiences spot different issues, and a system built by a homogenous group may overlook risks others would catch.
What companies are doing right
Some cases show how badly automated systems can go wrong, and how firms and agencies are responding with steps to address AI bias and improve compliance.
Between 2005 and 2019, the Dutch Tax and Customs Administration wrongly accused around 26,000 families of fraudulently claiming childcare benefits. An algorithm used in the fraud detection system disproportionately targeted families with dual nationalities and low incomes. The fallout led to public outcry and the resignation of the Dutch government in 2021.
LinkedIn has faced scrutiny over gender bias in its job recommendation algorithms. Research from MIT and other sources found that men were more likely to be matched with higher-paying leadership roles, partly due to behavioural patterns in how users applied for jobs. In response, LinkedIn implemented a secondary AI system to ensure a more representative pool of candidates.
Another example is the New York City Automated Employment Decision Tool (AEDT) law, which took effect on January 1, 2023, with enforcement starting on July 5, 2023. The law requires employers and employment agencies using automated tools for hiring or promotion to conduct an independent bias audit within one year of use, publicly disclose a summary of the results, and notify candidates at least 10 business days in advance. These rules aim to make AI-driven hiring more transparent and fair.
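The core calculation behind such an audit is often an impact ratio: each group's selection rate divided by the rate of the most-selected group. A sketch with hypothetical numbers, not the law's prescribed methodology:

```typescript
// Selection rate per group, then each group's impact ratio relative to the
// most-selected group. Ratios under 0.8 echo the "four-fifths" rule of
// thumb long used in US employment-discrimination analysis.
function impactRatios(
  selected: Record<string, number>,
  total: Record<string, number>,
): Record<string, number> {
  const rates: Record<string, number> = {};
  for (const g of Object.keys(total)) rates[g] = selected[g] / total[g];
  const top = Math.max(...Object.values(rates));
  const ratios: Record<string, number> = {};
  for (const g of Object.keys(rates)) ratios[g] = rates[g] / top;
  return ratios;
}

// Hypothetical screening results: 50 of 100 men advanced vs. 30 of 100 women.
const ratios = impactRatios({ men: 50, women: 30 }, { men: 100, women: 100 });
// men: 1.0; women: 0.6, below the 0.8 threshold, which would warrant scrutiny
```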
Aetna, a health insurer, launched an internal review of its claim approval algorithms, and found that some models led to longer delays for lower-income patients. The company changed how data was weighted and added more oversight to reduce this gap.
The examples show that AI bias can be addressed, but it takes effort, clear goals, and strong accountability.
Where we go from here
Automation is here to stay, but trust in systems depends on fairness of results and clear rules. Bias in AI systems can cause harm and legal risk, and compliance is not a box to check – it’s part of doing things right.
Ethical automation starts with awareness. It takes strong data, regular testing, and inclusive design. Laws can help, but real change also depends on company culture and leadership.
(Photo from Pixabay)
See also: Why the Middle East is a hot place for global tech investments
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
margoindustries · 4 months ago
Boost Business Efficiency with Industrial Automation Products | Margo Industries
From automated tools to advanced industrial machinery, Margo Industries offers automation solutions that improve business efficiency and performance. Explore our reliable products today.
gagande · 8 months ago
Purecode | The use of automated tools like ‘ts-migrate’
The use of automated tools like ‘ts-migrate’ can facilitate the transition of a JavaScript codebase to TypeScript, streamlining the integration process. To maintain code integrity when incorporating TypeScript, liberal use of type annotations is crucial as they provide documentation and assist with error-checking. However, a significant challenge when transitioning to TypeScript is the requirement to compile the code to JavaScript, since browsers cannot execute TypeScript directly.
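As a minimal illustration of that point (a hypothetical function, not actual ts-migrate output), annotations let the compiler reject bad calls before the code is ever compiled to JavaScript:

```typescript
// After migration, explicit annotations document what the function expects
// and let the compiler reject bad calls that plain JavaScript would only
// surface at runtime.
function totalPrice(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  return subtotal * (1 + taxRate);
}

const total = totalPrice([10, 20], 0.2); // 36
// totalPrice([10, 20], "20%") is now a compile-time error, caught by tsc
// before the code is compiled to JavaScript and shipped to a browser.
```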
bixels · 6 months ago
As cameras become more normalized (Sarah Bernhardt encouraging it, grifters on the rise, young artists using it), I wanna express how I will never turn to it because it fundamentally bores me to my core. There is no reason for me to want to use cameras because I will never want to give up my autonomy in creating art. I never want to become reliant on an inhuman object for expression, least of all if that object is created and controlled by manufacturing companies. I paint not because I want a painting but because I love the process of painting. So even in a future where everyone’s accepted it, I’m never gonna sway on this.
if i have to explain to you that using a camera to take a picture is not the same as using generative ai to generate an image then you are a fucking moron.
#ask me#anon#no more patience for this#i've heard this for the past 2 years#“an object created and controlled by companies” anon the company cannot barge into your home and take your camera away#or randomly change how it works on a whim. you OWN the camera that's the whole POINT#the entire point of a camera is that i can control it and my body to produce art. photography is one of the most PHYSICAL forms of artmaking#you have to communicate with your space and subjects and be conscious of your position in a physical world.#that's what makes a camera a tool. generative ai (if used wholesale) is not a tool because it's not an implement that helps you#do a task. it just does the task for you. you wouldn't call a microwave a “tool”#but most importantly a camera captures a REPRESENTATION of reality. it captures a specific irreproducible moment and all its data#read Roland Barthes: Studium & Punctum#generative ai creates an algorithmic IMITATION of reality. it isn't truth. it's the average of truths.#while conceptually that's interesting (if we wanna get into media theory) but that alone should tell you why a camera and ai aren't the same#ai is incomparable to all previous mediums of art because no medium has ever solely relied on generative automation for its creation#no medium of art has also been so thoroughly constructed to be merged into online digital surveillance capitalism#so reliant on the collection and commodification of personal information for production#if you think using a camera is “automation” you have worms in your brain and you need to see a doctor#if you continue to deny that ai is an apparatus of tech capitalism and is being weaponized against you the consumer you're delusional#the fact that SO many tumblr leftists are ready to defend ai while talking about smashing the surveillance state is baffling to me#and their defense is always “well i don't engage in systems that would make me vulnerable to ai so if you own an apple phone that's on you”#you aren't a communist you're just self-centered
feetpiclovers · 14 days ago
Ready to level up your FeetFinder game in 2025? In this video, I’m showing how AI tools are changing the game for every content creator out there — especially those running a faceless YouTube channel or building a strong digital brand.
Learn how to use ChatGPT to write high-converting captions, schedule content using Metricool or Later, and automate your inbox with smart chatbot templates, chat automation, and DM automation tools. Whether you’re focused on AI marketing, growing your social media management system, or just saving time with content automation, this guide is packed with tips.
I’ll walk you through building a powerful digital persona, managing a full creative workflow, and even launching a blog or AI blog for foot care, including content like how to massage feet and weekly foot affirmations. Plus, we’ll talk niche research, AI captions, AI thumbnails, and how to stay authentic while scaling.
If you're serious about using AI for content creators or AI for content creation, this one’s for you. Smash that like button, hit subscribe, and let’s build smarter, not harder.
wahroh · 5 months ago
Vindicitiveness.
wahoo-stomp · 2 days ago
I spent five years coming up with unique ways to photograph the same group of plushies to help tell a story.
You don't need AI to help you be creative, you're just being lazy and want brain chemicals without doing any of the work or respecting the people who put time and effort into it.
Claiming those without sufficient technological or life extension access are proven criminals or non-citizens or are artificial simulations resembling life that do not need technological access or to have data recorded in relation to them. Criminals claiming their victims are merely automated. Automatics. Automated.
pizzafishandchips · 4 months ago
Entering my LinkedIn whore era. Unfortunately.
wavetapper · 5 months ago
the "ai was supposed to do our dishes not make art/do fun things for us" line is really funny wrt 3d modelling because yeah there are a ton of text-to-model ai programs now but you still have to sit there and manually retopologise them if you want anything usable
radioregine · 6 months ago
entering my #gifmaking era i think
virtual-boy · 8 months ago
[⚙️][⚙️][⚙️] [🔧][🔧][🔧] [🔩][🔩][🔩]
"Move along, interloper!"
["Move along, interloper!"]
snorpy fizzlebean from bugsnax stimboard with fursuit paws, machinery and tools!
[requested by @baby-snoopy, tysm 4 requestiiingg!!! >:03]
httpdss · 11 months ago
🚀 Introducing STRUCT: Automated Project Structure Generator!
Organize your projects effortlessly with Struct. From CI/CD pipelines to Docker setups, Struct ensures your repo follows best practices.
🔗 Check it out on GitHub and start contributing today! No one needs to know this, but it was built from the ground up with ChatGPT-4o